Privacy experts grappling with automated AI decision-making
Making decisions about individuals using computer algorithms is ubiquitous. But adding artificial intelligence to the mix makes things far more complicated.

Automated decision-making systems using artificial intelligence are proving to be a conundrum for privacy experts and organizations.
While automated decision-making systems that utilize algorithms have been around for some time, the vertiginous rise of AI has seen this suite of technologies increasingly gain traction in business and government.
Some employers use them to attract candidates and evaluate job performance, while the financial sector has implemented AI-based systems to help make loan and credit decisions. More businesses are expected to follow suit. A recent study by Statistics Canada found that nearly 11 per cent of all Canadian companies plan to use AI tools over the next 12 months to produce goods, streamline processes or deliver services, including AI-based decision-making.
Ottawa has also jumped into the fray. According to the Canadian Tracking Automated Government (TAG) register, federal government agencies have launched more than 300 projects and initiatives using AI. Most of the automated decision-making tools the feds use aim to make public administration more efficient and cost-effective, and to assist rather than replace human decision-makers. For instance, the Department of Veterans Affairs uses the technology to help triage and speed up disability pension applications, immigration authorities employ it to help assess temporary resident visa applications, and others, including National Defence, use it to shortlist candidates to promote diversity in hiring.
“Governments in Canada have been moving a little bit more slowly and approaching automated decision systems starting first with systems that triage decisions,” says Teresa Scassa, the Canada research chair in information law and policy.
“But there's no reason to expect that we won't be moving further down that path to where machine-based systems are making decisions about us.”
However, as this digital alchemy has become increasingly pervasive, a Pandora’s box of concerns has come to the fore.
For one, AI-based automated decision-making can trigger consent and transparency issues. Organizations may rely on large datasets that can include personal information, and it may not always be realistic for those developing these new systems to seek individual consent, the only legal basis for collecting personal information in Canada, says Eloïse Gratton, the co-chair of Osler’s national privacy and data management practice.
This can lead to concerns about the potential misuse of personal information if the organization does not implement risk mitigation measures, such as removing sensitive personal information from its dataset.
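To illustrate the kind of risk mitigation Gratton describes, here is a minimal sketch of how an organization might strip sensitive identifiers from records before using them to build a decision-making model. The field names and records are hypothetical, and real de-identification programs go well beyond dropping columns.

```python
# Minimal sketch: removing sensitive identifiers from records before model
# development. Field names and records here are hypothetical.
from typing import Any

SENSITIVE_FIELDS = {"name", "sin", "email", "date_of_birth", "health_notes"}

def deidentify(record: dict[str, Any]) -> dict[str, Any]:
    """Return a copy of the record with sensitive fields dropped."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

applicants = [
    {"name": "A. Tremblay", "sin": "123-456-789", "income": 58000, "years_employed": 4},
    {"name": "B. Singh", "email": "b@example.com", "income": 72000, "years_employed": 7},
]

training_data = [deidentify(r) for r in applicants]
print(training_data)
# [{'income': 58000, 'years_employed': 4}, {'income': 72000, 'years_employed': 7}]
```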
Privacy experts say automated decision-making may also involve profiling individuals based on their data, which can lead to intrusive surveillance and the potential for creating detailed personal files without consent.
Unintended bias, whether introduced in the code or learned from training data, could lead to harm and discriminatory practices in areas such as hiring, law enforcement, and healthcare, particularly for those who are most vulnerable “and less able to push back against the system,” says Scassa, who’s also a faculty member at the University of Ottawa’s Centre for Law, Technology and Society.
Case in point: Last year, a China-based tutoring company agreed to settle a lawsuit filed by a U.S. government agency alleging it used hiring software powered by AI to reject older job applicants.
“Where automated decision-making can become an area of concern is if decisions are made that are going to affect the legal rights of an individual,” says Charles Morgan, the national co-leader of McCarthy Tétrault’s cyber/data group.
These are among the reasons it can be challenging for organizations using AI for decision-making to navigate and comply with privacy and data protection laws and regulations from different jurisdictions.
“These laws can be quite complex, and they are quickly evolving,” says Gratton.
Indeed, the global AI regulatory landscape is rapidly changing. The European Union’s Artificial Intelligence Act, the first-ever legal framework on AI, came into force last summer. The new legislation takes a tiered risk approach, with each tier subject to differing levels of regulation, from transparency notices to human oversight and technical provisions. It creates new compliance obligations for Canadian businesses targeting the European market.
As well, the world’s first legally binding international instrument on AI was negotiated by the 46 member states of the Council of Europe and 11 non-members, including Canada and the U.S. The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law establishes general obligations for actors across the lifecycle of AI systems. It opened for signature in September but has not yet been ratified.
“The fact that the international community came together to attempt to establish harmonized and binding rules related to the regulation of AI is significant in itself,” says Morgan.
“If Canada does become a signatory and does ratify this treaty, then the treaty provides for a binding obligation. So that will be significant.”
However, Ottawa's efforts to create a comprehensive AI regulatory framework have stalled. Bill C-27, which would have established a new privacy legal regime, created a new data protection tribunal, and introduced Canada’s first comprehensive attempt at regulating AI, had already been paused and died on the order paper when Parliament was prorogued.
As a result, the status quo reigns in Canada. Soft law instruments such as the Treasury Board’s Directive on Automated Decision-Making and its accompanying Algorithmic Impact Assessment tool provide oversight and are designed to guide the adoption and use of automated decision-making in the federal government. But there’s also a hodgepodge of existing privacy law, human rights law, intellectual property law and tort law that can act as guardrails and partially regulate the AI industry in Canada.
“There are questions about whether the laws need another look in the AI context and whether human rights commissions need to be a better resource to deal with complex issues of AI bias,” says Scassa.
Canada’s Privacy Commissioner Philippe Dufresne would welcome stronger privacy legislation and the power to make recommendations and issue orders, as the Standing Committee on Access to Information, Privacy and Ethics recommended. That said, he, along with his provincial counterparts and international colleagues, insists that existing legislation applies to AI and automated decision-making.
“Privacy law provides a number of principles that are absolutely critical and relevant and useful to organizations developing and using AI,” Dufresne says.
“But it's also the role of privacy commissioners such as myself to be there, to be able to provide guidance and to be able to hear from industry if there are some specific challenges and if we can assist them.”
Under privacy law, he says, individuals should know what, how, when and why personal information is collected or used. Organizations using or developing AI systems should be able to explain any decision-making process used by those new technologies. But Dufresne acknowledges that is a challenge, as many AI-based automated decision-making systems operate like black boxes whose internal workings are a mystery and whose decision logic is cryptic, even to those who created the model.
“Some models are not good at providing an explanation of the principal factors that went into the decision-making process,” says Morgan.
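To make the contrast concrete, here is a minimal, illustrative sketch, not drawn from any particular system, of how a simple linear scoring model can surface the principal factors behind a decision; the weights, features and applicant are invented. Opaque models offer no such direct readout.

```python
# Illustrative sketch only: a simple logistic scoring model whose per-feature
# contributions can be read off directly. Weights, features and the applicant
# are invented for illustration.
import math

WEIGHTS = {"income_thousands": 0.04, "years_employed": 0.3, "missed_payments": -0.9}
BIAS = -2.0

def score(applicant: dict[str, float]) -> float:
    """Probability-like approval score between 0 and 1."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def principal_factors(applicant: dict[str, float]) -> list[tuple[str, float]]:
    """Per-feature contributions to the score, largest magnitude first."""
    contributions = [(f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS]
    return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)

applicant = {"income_thousands": 65, "years_employed": 3, "missed_payments": 2}
print(f"approval score: {score(applicant):.2f}")
for feature, contribution in principal_factors(applicant):
    print(f"  {feature}: {contribution:+.2f}")
```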
Quebec’s privacy legislation, known as Law 25 and deemed the most consumer-friendly in the country, may very well force organizations to provide explainability and transparency. Privacy pundits say the province’s privacy law, the subject of a major overhaul in 2021, could even serve as a template that addresses many of the concerns stemming from AI-based automated decision-making.
Heavily influenced by the European Union’s General Data Protection Regulation (GDPR), which took effect in 2018, Law 25 compels organizations to inform individuals when a decision is made exclusively through automated decision-making. That notification should occur when or before the decision is communicated to individuals. Further, individuals have a right to be informed of the personal information used to render the decision, as well as the reasons, principal factors and parameters that led to it. Individuals must also be given the opportunity to submit observations regarding the automated decision, and they have a right to request corrections to the personal information used in the decision-making process, which implies a degree of human oversight.
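As a rough illustration of what those obligations could look like in practice, the following sketch shows the kind of record an organization might assemble when notifying someone of a fully automated decision; the structure and field names are hypothetical, not statutory language.

```python
# Hypothetical sketch of a notice for a fully automated decision; the field
# names and structure are illustrative, not statutory text.
from dataclasses import dataclass, field

@dataclass
class AutomatedDecisionNotice:
    decision: str                         # outcome communicated to the individual
    personal_information_used: list[str]  # data elements the system relied on
    principal_factors: list[str]          # reasons, factors and parameters behind the decision
    correction_contact: str               # where to request corrections to that information
    observations: list[str] = field(default_factory=list)  # observations submitted by the individual

notice = AutomatedDecisionNotice(
    decision="Credit application declined",
    personal_information_used=["declared income", "payment history"],
    principal_factors=["two missed payments in the last 12 months"],
    correction_contact="privacy@lender.example",
)
notice.observations.append("The missed payments occurred during a billing dispute.")
print(notice)
```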
Constantine Karbaliotis, an expert in global privacy compliance and privacy management with nNovation LLP, says these requirements and their scope may be even broader than the GDPR.
Unlike the GDPR, whose rules around automated decisions apply only if a decision affects a person’s legal status or rights, Law 25 regulates any decision based exclusively on an automated processing of personal information, with no exceptions.
“You have to remember the rest of Law 25 also applies to automated decision-making,” he says.
“You have to provide access, correction, and deletion. Just because it's automated decision-making, just because it's hard, because you're pouring data into a training model or a large language model, does not mean that any of the other rights of the individual go away.”
Experts say there’s little doubt that the courts will be dealing with many of the tricky issues that arise from AI-automated decision-making. However, determining and establishing liability for AI-caused harm will be challenging. If an alleged harm was caused by an AI system that uses machine learning to make decisions without a “human in the loop,” general liability principles may be difficult to apply, even more so in cases dealing with black boxes.
Then there’s the matter of accountability, as complex AI systems involve many stakeholders in their development and deployment.
“If something goes wrong, who's ultimately accountable?” says Morgan.
“It’s going to be an intensely fact-based analysis to determine if there has been an error or fault, who’s at fault, and what the relative responsibility of the players involved is.”
Scassa says that will also be a very costly exercise and out of the reach of many.
It is an admittedly complicated realm. But as Gratton notes, while AI has the potential to improve decision-making processes, “considering and addressing these privacy implications is essential to protect individuals’ rights and maintain public trust.”